27 research outputs found

    Automatic Creation of Object Hierarchies for Ray Tracing of Dynamic Scenes

    Ray tracing acceleration techniques most often consider only static scenes, neglecting the processing time needed to build the acceleration data structure. With the development of interactive ray tracing systems, this reconstruction time becomes a serious bottleneck for dynamic scenes. In this paper, we describe two strategies for efficient updating of bounding volume hierarchies (BVHs) for scenarios with arbitrarily moving objects. The first exploits spatial locality in the object distribution for faster reinsertion of the moved objects. The second allows insertion and deletion of objects in almost constant time by using a hybrid system, which combines benefits from both spatial subdivision and BVHs. Depending on the number of moving objects, our algorithms adjust a dynamic BVH six to one hundred times faster than rebuilding the complete hierarchy would take, while rendering times with the resulting hierarchy remain almost untouched.
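The abstract does not spell out either strategy, but the basic operation they accelerate can be illustrated: once an object moves, the bounding boxes of its leaf's ancestors must be re-expanded so the hierarchy stays valid. A minimal sketch (hypothetical structure, with 1-D intervals standing in for 3-D AABBs; not the paper's algorithm):

```python
class Node:
    def __init__(self, box, parent=None):
        self.box = box          # (lo, hi) interval; a real BVH stores 3-D AABBs
        self.parent = parent
        self.children = []

def union(a, b):
    """Smallest interval enclosing both intervals."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def refit_upwards(leaf):
    """Walk from a moved leaf to the root, re-expanding ancestor bounds."""
    node = leaf.parent
    while node is not None:
        box = node.children[0].box
        for c in node.children[1:]:
            box = union(box, c.box)
        node.box = box
        node = node.parent

# Two leaves under one root; move the first object and refit.
root = Node((0.0, 2.0))
a = Node((0.0, 1.0), root)
b = Node((1.0, 2.0), root)
root.children = [a, b]
a.box = (3.0, 4.0)          # the object moved far away
refit_upwards(a)
print(root.box)             # (1.0, 4.0): the root again encloses both leaves
```

Pure refitting degrades tree quality over time, which is exactly why the paper's reinsertion and hybrid strategies matter.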

    Interactive Augmentation and Lighting Reconstruction of Photographs with Radiosity

    Using the radiosity method it is possible to increase the realism of computer-generated scenes with a physical lighting simulation. In augmented reality applications it is important to modify existing photographs by inserting virtual objects, and the virtual objects should appear under the correct lighting conditions. In this thesis, a method for augmenting photographs based on a radiosity simulation was developed. Additional objects are added to the scene, and the result of the radiosity simulation is displayed in the photograph using a differential rendering technique. Usually, the parameters necessary for a lighting simulation of the scene shown in the photograph are not available. Required for a radiosity simulation are the geometry of the scene and the camera parameters as well as light and material properties; therefore, methods for reconstructing these parameters from photographs were developed. Virtual objects can be moved interactively with the mouse pointer in the photograph, and for displaying the result the user can choose between a fast update with lower quality and a slow update with high quality. It is also possible to add new light sources and to modify existing lights and materials. All algorithms were integrated into the radiosity simulation system GENESIS 2.
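The differential rendering step described above reduces to a simple per-pixel operation: the photograph is modified by the difference between a simulation of the augmented scene and a simulation of the real scene alone. A minimal sketch (one value per pixel; not the GENESIS 2 implementation):

```python
def differential_composite(photo, sim_real, sim_augmented):
    """Per-pixel differential rendering: add to the photograph the change
    in simulated radiance caused by the virtual objects."""
    return [p + (aug - real)
            for p, real, aug in zip(photo, sim_real, sim_augmented)]

photo         = [0.75, 0.75, 0.75]   # captured pixel values
sim_real      = [0.50, 0.50, 0.50]   # simulation of the real scene only
sim_augmented = [0.50, 0.25, 0.50]   # same simulation plus a virtual occluder

print(differential_composite(photo, sim_real, sim_augmented))
# [0.75, 0.5, 0.75]: the virtual shadow darkens only the middle pixel
```

Because only the *difference* is applied, errors in the reconstructed materials largely cancel where the virtual objects have no effect.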

    Preserving shadow silhouettes in illumination‐driven mesh reduction

    A main challenge for today’s renderers is the ever-growing size of 3D scenes, which exceeds the capacity of typically available main memory. This especially holds true for graphics processing units (GPUs), which could otherwise be used to greatly reduce rendering time. Much of the memory is spent on detailed geometry with mostly imperceptible influence on the final image, even in a global illumination context. Illumination-driven mesh reduction, embedded in a Monte Carlo based global illumination simulation, steers simplification towards areas with low visible contribution. While this works well for preserving high-energy light paths such as caustics, it has two problems: first, objects that cast shadows while not being visible themselves are not preserved, resulting in highly inaccurate shadows; second, opaque objects lack proper reduction guidance, since there is no importance gradient on their backside, resulting in visible over-simplification. We present a solution to these problems by extending illumination-driven mesh reduction with occluder information, focusing on occluder silhouettes, and by combining it with commonly used error quadrics to preserve geometric features. Additionally, we demonstrate that the combined algorithm still supports iterative refinement of initially reduced geometry, resulting in an image visually similar to an unreduced rendering and enabling out-of-core operation.
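The error quadrics mentioned above, in the style of Garland and Heckbert's quadric error metric, score a candidate collapse position by its summed squared distance to the planes of the faces that met at the collapsed vertices. A minimal sketch, assuming unit-length plane normals and skipping the 4×4 matrix accumulation used in practice:

```python
def plane_sq_dist(plane, v):
    """Squared distance from point v to plane (a, b, c, d) with unit normal."""
    a, b, c, d = plane
    x, y, z = v
    return (a * x + b * y + c * z + d) ** 2

def quadric_error(planes, v):
    """Quadric-style collapse cost: sum of squared point-plane distances."""
    return sum(plane_sq_dist(p, v) for p in planes)

# Two faces meeting at a vertex: the xy-plane (z = 0) and the plane z = x.
planes = [(0.0, 0.0, 1.0, 0.0), (-0.7071, 0.0, 0.7071, 0.0)]
print(quadric_error(planes, (0.0, 0.0, 0.0)))   # 0.0: lies on both planes
print(quadric_error(planes, (0.0, 0.0, 1.0)))   # off both planes: high cost
```

In the paper's combined scheme, such a geometric cost would be weighed against the illumination-driven importance so that feature edges and silhouettes survive simplification.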

    EUROGRAPHICS 2005 / J. Dingliana and F. Ganovelli, Short Presentations: Differential Photon Mapping: Consistent Augmentation of Photographs with Correction of All Light Paths

    Augmenting images with consistent lighting is possible with differential rendering. This composition technique requires two lighting simulations: one with only the real geometry and another with the additional virtual objects. The difference between the two simulations can be used to modify the original pixel colors. The main drawback of differential rendering is that not all modified light paths can be displayed: the result of the lighting simulation is visible in a reflective object instead of the real environment augmented with virtual objects. Moreover, many regions in the photograph remain unchanged, so the same work is done twice without any visual effect. In this paper we present a new approach for augmenting a photograph with only a single photon mapping simulation. The changes in lighting introduced by a virtual object are directly simulated using a differential photon map, and all light paths intersecting the virtual object are corrected. To demonstrate the correctness of our approach, we compare our simulation results with real photographs. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
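One plausible reading of a "differential photon map" is a single map holding signed photon contributions: paths the virtual object blocks carry negative power, paths it newly creates carry positive power, so one simulation encodes the whole lighting change. A hedged sketch of that idea (an assumption for illustration, not the paper's confirmed data structure):

```python
def radiance_change(photons_at_pixel):
    """Sum signed photon powers gathered at a pixel: negative entries are
    light the virtual object removed, positive entries light it added."""
    return sum(photons_at_pixel)

blocked = [-0.25, -0.125]   # direct light now shadowed by the virtual object
added   = [0.0625]          # light bounced off the virtual object

print(radiance_change(blocked + added))   # -0.3125: net darkening of the pixel
```

The signed total would then be applied to the photograph pixel, replacing the two full simulations of classic differential rendering.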

    Real Time Augmentation of Omni-directional Images with Consistent Lighting

    Abstract: Panoramic images can be used as an image-based method for navigating in a real environment. Recent publications have shown that it is possible to augment photographs with correct lighting. We combine these two approaches into an interactive tool that allows real-time modifications of the real scene while maintaining consistent lighting. The possible changes are: insertion of virtual objects, new light sources, and changes of materials. The only user input is a high-dynamic-range photograph of a light probe. A 3D model is reconstructed from the light probe image using object-based reconstruction methods. The differential rendering for all user manipulations is performed with the latest generation of graphics hardware. Finally, we remove the restriction to a fixed viewpoint by projecting the environment map onto the geometry in the fragment program, and we present a hole-filling algorithm for the parts of the geometry that were not visible in the light probe image.

    Eurographics Symposium on Rendering (2007), Jan Kautz and Sumanta Pattanaik (Editors): Interactive Illumination with Coherent Shadow Maps

    Figure 1: The left image shows a scene with two dragons (280k faces each), where light position, shape, and color, as well as object position, orientation, and all material parameters can be manipulated freely (3.3 FPS). The middle image shows bump, diffuse, and specular maps, all with physically plausible shadows (4.6 FPS). In the right image, a fixed local linear light of arbitrary shape is used (1.8 seconds). All images are rendered with an NVIDIA GF 8 at 1024 × 768 pixels. We present a new method for interactive illumination computations based on precomputed visibility using coherent shadow maps (CSMs). It is well known that visibility queries dominate the cost of physically based rendering. Precomputing all visibility events, for instance in the form of many shadow maps, enables fast queries and allows for real-time computation of illumination, but requires prohibitive amounts of storage. We propose a lossless compression scheme for visibility information based on shadow maps that efficiently exploits coherence. We demonstrate a Monte Carlo renderer for direct lighting using CSMs that runs entirely on graphics hardware. We support spatially varying BRDFs, normal maps, and environment maps, all with high frequencies, spatial as well as angular. Multiple dynamic rigid objects can be combined in a scene. As opposed to precomputed radiance transfer techniques, which assume distant lighting, our method includes distant lighting as well as local area lights of arbitrary shape, varying intensity, or anisotropic light distribution, all of which can freely vary over time. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Color, Shading, Shadowing and Texture.
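The abstract does not detail the compression format, but the kind of coherence it exploits is easy to illustrate: shadow maps from nearby light positions differ in only a few texels, so a delta plus run-length encoding compresses them losslessly. A hedged sketch of that general idea (not the paper's actual CSM scheme):

```python
def rle(values):
    """Run-length encode a flat list into [(value, count), ...]."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

def compress_delta(prev_map, cur_map):
    """Encode a depth map as the RLE of its difference to the previous map;
    coherent maps produce long runs of zeros."""
    return rle([c - p for p, c in zip(prev_map, cur_map)])

def decompress_delta(prev_map, encoded):
    deltas = [v for v, n in encoded for _ in range(n)]
    return [p + d for p, d in zip(prev_map, deltas)]

m0 = [5, 5, 5, 7, 7, 9, 9, 9]           # depth map for one light sample
m1 = [5, 5, 5, 7, 8, 9, 9, 9]           # adjacent sample: one texel changed
enc = compress_delta(m0, m1)
print(enc)                               # [(0, 4), (1, 1), (0, 3)]
print(decompress_delta(m0, enc) == m1)   # True: the round trip is lossless
```

Lossless reconstruction matters here because the depth values are used for exact visibility queries, where lossy compression would introduce shadow artifacts.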

    Approximating dynamic global illumination in image space

    Figure 1: We generalize screen-space ambient occlusion (SSAO) to directional occlusion (SSDO) and one additional diffuse indirect bounce of light. This scene contains 537k polygons and runs at 20.4 fps at 1600×1200 pixels. Both geometry and lighting can be fully dynamic. Physically plausible illumination at real-time frame rates is often achieved using approximations. One popular example is ambient occlusion (AO), for which very simple and efficient implementations are used extensively in production. Recent methods approximate AO between nearby geometry in screen space (SSAO). The key observation of this paper is that screen-space occlusion methods can be used to compute many more types of effects than just occlusion, such as directional shadows and indirect color bleeding. The proposed generalization has only a small overhead compared to classic SSAO, approximates direct and one-bounce light transport in screen space, can be combined with other methods that simulate transport for macro structures, and is visually equivalent to SSAO in the worst case without introducing new artifacts. Since our method works in screen space, it does not depend on geometric complexity. Plausible directional occlusion and indirect lighting effects can be displayed for large and fully dynamic scenes at real-time frame rates.
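The SSAO-to-SSDO generalization can be shown in miniature: where SSAO merely counts blocked directions, SSDO lets each *unblocked* screen-space direction contribute its incoming radiance. A deliberately simplified 1-D depth-buffer sketch (a hypothetical toy, far simpler than the paper's hemisphere sampling):

```python
def ssdo_pixel(depth, x, light, radius=2):
    """Gather light at pixel x from screen-space directions not blocked by
    nearer geometry; SSAO would only count the blockers."""
    gathered = 0.0
    samples = 0
    for dx in range(-radius, radius + 1):
        if dx == 0 or not (0 <= x + dx < len(depth)):
            continue
        samples += 1
        if depth[x + dx] < depth[x]:    # neighbour is nearer: it occludes
            continue                    # SSDO: no light from that direction
        gathered += light[x + dx]       # unoccluded: take incoming radiance
    return gathered / samples

depth = [1.0, 1.0, 0.2, 1.0, 1.0]       # a nearby blocker at pixel 2
light = [1.0, 1.0, 1.0, 1.0, 1.0]
print(ssdo_pixel(depth, 1, light))       # partly shadowed by the blocker
print(ssdo_pixel([1.0] * 5, 2, light))   # 1.0: no occluders, fully lit
```

Replacing the uniform `light` array with per-direction radiance is what turns flat ambient occlusion into directional shadows; adding the blockers' own colors back in gives the one-bounce color bleeding the paper describes.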